A test script is a file that contains test cases and the set of outcome attributes (both global and entity attributes, including any defined tolerances) used by those test cases. Oracle Policy Modeling has an integrated regression tester that can be used to create test scripts to compare outcomes from a rulebase with another set of outcomes.
Test scripts use the runtime model of the rulebase, so if you make any changes to your rulebase while regression testing, you will need to close and re-open your test script for those changes to be reflected in the test script file.
- View the details of a test script
- Change the platform that the regression tester runs on
To add a new test script file to your project:
TIP: Multiple test scripts can exist in a project. Using a single test script on a large project may present problems if the project is under source control since, generally speaking, only one person can edit a file at a time. To ameliorate this problem multiple test scripts can be defined so that each can be edited separately. Multiple test scripts may also be defined to enable different reports to be created for a given set of test cases and/or to enable the use of different outcome sets for a test script.
A test case is a combination of an input data set and expected results.
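Conceptually, a test case pairs an input data set with the expected results it should produce. As a rough sketch only (Oracle Policy Modeling stores test cases in its own XML format; the class and field names below are hypothetical, not OPM's API):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Illustrative model of a regression test case:
    input data plus expected results."""
    name: str
    # Base-level attribute values that will be loaded into the rulebase.
    input_data: dict = field(default_factory=dict)
    # Expected values for each outcome attribute.
    expected_results: dict = field(default_factory=dict)

    def compare(self, actual_results: dict) -> dict:
        """Return a pass/fail flag per outcome attribute by matching
        actual results against the expected results."""
        return {attr: actual_results.get(attr) == expected
                for attr, expected in self.expected_results.items()}
```

For example, `TestCase("unit_JSRPC10", {"age": 67}, {"eligible": True}).compare({"eligible": True})` would report the single outcome as passing.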
Test cases can be created, edited and deleted in Oracle Policy Modeling.
To add a new test case to your test script:
Each project should have a unique naming convention to be used when creating test cases. Some guidelines for establishing a naming convention are given below. The name used for each test case should contain a prefix indicating the origin of the test case, followed by a unique identifier.
Suggested prefixes are given in the table below:
| Prefix | Purpose |
|---|---|
| unit_ | Unit test cases to be used by developers. |
| formal_ | Test cases that are derived from the formal test case script set up for the project. |
| client_ | Test cases or use cases specifically requested by the client. |
Other project specific prefixes may be used if required.
The unique identifier for each file depends on the origin of the test case. The suggested approach to creating the unique identifier is:
| Origin | Unique identifier |
|---|---|
| Unit | The unique identifier should include the developer's initials, an abbreviation for the area of the rulebase being tested, and a sequence number. For example, the tenth unit test case created by John Smith for Retirement Pensions Category C would be called unit_JSRPC10.xml. This format allows developers to readily identify their own test cases. |
| Formal Test Script | The formal test script is to be maintained by the testing team. Use the unique identifier assigned to the test case in the formal test script. If a test case that is identified as necessary for regression testing has not been previously recorded in the test script, it should be recorded there and assigned an identifier before being added to the regression testing script. This helps to maintain a database of test case IDs and descriptions. The unique identifier obtained from the formal test script will reflect the benefit type/general area of the rulebase that is being tested. For example, RPA01 is the first test case for Retirement Pension Category A. |
| Client | As for unit testing. These cases should have their own identifier, like the unit test cases. Instead of initials, use a unique identifier for the client, e.g. client_DWPRPC02.xml. |
| Business Development/Partners | As for Client. |
TIP: When you open your test case, you can add a description of the test case in the Notes field.
Test cases can also be imported and exported to allow for external creation and editing. See Import test cases from another project and Create a test case from within an interview for more information.
To create a copy of an existing test case in your test script:
Once you have created your new test case, you need to set up the input data for your test case. The input data is the set of data from which the actual results (outcome values) of the test case are generated. The input data contains attribute instances and entity instances, along with the values that should be assigned to them.
The test case editor is used to investigate goals, infer relationships and set values for base level attributes in Oracle Policy Modeling. The test case editor can be accessed by double-clicking a test case on the Test Cases tab in the test script. (The test case editor is very similar to the debugger with a Data view and a Decision view.)
To investigate a goal in the test case editor:
After you have added any entity instances in the test case editor, you can investigate an inferred relationship. To do this:
To set the value of an attribute in the test case editor:
Alternatively, you can double-click the selected attribute to open the Set Attribute Value dialog box and then select the appropriate value, ensuring that it is entered in the correct format.
After setting a value, the list of attribute values in the Data and Decision views will be updated with the value you specified, as well as the values for any other attributes which have been inferred as a result.
Input data can also be created by setting values for attributes in the debugger or Web Determinations and then saving/exporting this data as an XDS file which can then be imported into a test case in Oracle Policy Modeling.
See Create a test case from within an interview for more information.
Once you have created the input data for your test case, you need to specify the expected results for the test case. The expected results are the data set that is matched against the actual results when the input data is loaded into the rulebase. The expected results contain instances of the attributes and entities found in the outcome set. When attributes are added to or deleted from the outcome set, the expected results of all test cases in that test script are updated accordingly.
To specify the expected result for an attribute:
| Option | Behavior |
|---|---|
| Set Expected Value... | Opens the Edit Expected Result dialog box where you can specify a particular value for the expected result, an expected result of uncertain, or an expected result of unknown. You can also specify change points for the expected result. |
| Set Expected Value to Default (<default expected result value>) | Defaults the expected result to the value specified as the default value in the Edit Outcome dialog box. |
| Set Expected Value to Current Value | Sets the expected value to the current value of the attribute instance. The current value of the attribute instance is shown in angle brackets in the Value column in the Inferred Attributes list. |
| Set Expected Value to true | Sets the expected value to 'true'. (This option is only available for boolean attributes.) |
| Set Expected Value to false | Sets the expected value to 'false'. (This option is only available for boolean attributes.) |
| Set Expected Value to Unknown | Sets the expected value to 'unknown'. |
| Set Expected Value to Uncertain | Sets the expected value to 'uncertain'. |
To do a bulk import of expected results:
A test script has an outcome set for its test cases; this should contain all the inferred attributes that will be used in the comparisons that determine whether the rulebase produces the correct results.
The following types of attributes would be appropriate outcome attributes:
TIP: Too many outcome attributes increases initial start-up time and maintenance overheads, and can make the reports less manageable. The maximum number of outcome attributes should therefore be limited to 10-12 if possible. For unit testing, the choice of outcome attributes may be slightly different as the very nature of unit testing means that intermediate attributes are monitored, rather than the overall end result.
There are two ways to add outcomes to your test script:
Attributes from any entity can be added as outcomes.
The outcome set editor can be accessed by clicking on the Outcomes tab in the test script file.
To add an outcome attribute in the outcome set editor:
TIP: Outcomes can be reordered in the outcome set editor by right-clicking and selecting Move Up or Move Down.
To add an attribute as an outcome from the test case editor:
Outcome attributes are shown underlined in the Inferred Attributes list in the test case editor.
Threshold values tell the regression tester that a given test case should pass if an actual value falls within a specified range. To specify a threshold for an attribute, select the Threshold Value tab in the Edit Outcome dialog.
The following table explains how to set a threshold:
| Setting | Applies to | Description |
|---|---|---|
| Value | Date, currency or number attributes | A date threshold is defined as a number of days, months or years. A number threshold can be either an absolute value or a percentage. Number and currency thresholds can be either integer or decimal values. |
| Apply threshold value to | Date, currency or number attributes | Specifies whether the threshold applies above and/or below the expected outcome: with X = Actual Result, Y = Expected Result and T = threshold value, the test case passes when Y <= X <= Y + T (above only), Y - T <= X <= Y (below only), or Y - T <= X <= Y + T (above and below). |
| Ignore | | Specifies whether unknown and/or uncertain actual values should be ignored when comparing against the expected result. |
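The pass/fail logic behind thresholds can be sketched as follows. This is an illustrative approximation only, not Oracle Policy Modeling's actual implementation; the function name and parameters are hypothetical, and dates are ignored for simplicity:

```python
def within_threshold(actual, expected, threshold,
                     above=True, below=True, percentage=False):
    """Return True if an actual numeric result passes against the
    expected result, given a threshold T.

    With X = actual, Y = expected, T = threshold:
      above and below -> Y - T <= X <= Y + T
      above only      -> Y     <= X <= Y + T
      below only      -> Y - T <= X <= Y
    If percentage=True, T is interpreted as a percentage of the
    expected result rather than an absolute value.
    """
    t = abs(expected) * threshold / 100.0 if percentage else threshold
    lower = expected - t if below else expected
    upper = expected + t if above else expected
    return lower <= actual <= upper

print(within_threshold(102, 100, 5))               # True  (within +/-5)
print(within_threshold(98, 100, 5, below=False))   # False (threshold applies above only)
print(within_threshold(104, 100, 5, percentage=True))  # True (within 5% of 100)
```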
You can flag an outcome so that any actual value for the outcome will be ignored when the test case is run. This will result in the expected outcome always passing. To do this, select the outcome attribute in the test case editor, right-click and select Ignore Result.
To bulk delete attributes that are no longer used in your rulebase, right-click anywhere in the outcome set editor and select Delete Invalid Outcomes...
NOTE: If an entity no longer exists in the rulebase then all attributes belonging to that entity will be flagged as invalid.
Test cases often need to be reviewed or modified to allow for changes in the rulebase. Changes can be made to individual test cases in the test case editor, or across multiple test scripts and test cases with the Update Test Script Wizard.
To make changes across multiple test scripts and test cases:
This option allows you to insert a value for an attribute which has not yet been added to your test cases. This is usually needed where a new attribute has been added to the rulebase since the last time the test cases were updated.
To insert an attribute:
This option allows you to update the value for an attribute which already exists in your test cases.
To update the value for an attribute:
This option allows you to remove an attribute which still exists in your test cases, but has been removed from the rulebase. Alternatively, you can specify an attribute value which should replace it.
To remove or replace missing attributes:
This option allows you to remove or replace any relationships in your test cases which no longer exist in the rulebase.
To remove or replace invalid relationships:
This option allows you to set relationships to known or unknown.
To set the new state of a relationship:
Review your changes on the Summary of Changes screen. Click Back to amend your changes if necessary, then click Next to apply the changes.
You have the option to validate a test script when it is opened and to show a warning message if:
To change or view these settings, go to File | Project Properties | Regression Tester Properties | General.
The Test Specification report allows you to view the details of all of your test cases at once. To view the Test Specification report for one or more test scripts:
To remove a test script from a project:
NOTE: The file remains in your file system but has been removed from your Oracle Policy Modeling project. To permanently delete a file from both your file system and from your project, right-click it in Oracle Policy Modeling and select Delete.
To change the runtime platform for the regression tester:
Note that this setting also determines which platform the test script coverage analyzer and the what-if analyzer run on.